# Chinese Enhancement
## Ling Lite 1.5
*inclusionAI · MIT · Large Language Model · Transformers · 46 downloads · 3 likes*

Ling is a large-scale Mixture-of-Experts (MoE) language model open-sourced by InclusionAI. The Lite version has 16.8 billion total parameters with 2.75 billion activated per token, delivering strong performance for its size.

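Since the card tags the model as Transformers-compatible, it can presumably be loaded with the standard `AutoModelForCausalLM` API. A minimal sketch, assuming the repository id `inclusionAI/Ling-lite-1.5` and that the MoE architecture ships custom modeling code (hence `trust_remote_code=True`):

```python
# Minimal sketch: loading a Transformers-hosted MoE checkpoint.
# The repo id and the trust_remote_code requirement are assumptions,
# not confirmed by the model card above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "inclusionAI/Ling-lite-1.5"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",      # keep the checkpoint's native dtype
    device_map="auto",       # shard across available GPUs/CPU
    trust_remote_code=True,  # custom MoE modeling code, if any
)

inputs = tokenizer("你好，请介绍一下你自己。", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
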
## Smoothie Qwen3 14B
*dnotitia · Apache-2.0 · Large Language Model · Transformers · English · 2,518 downloads · 4 likes*

Smoothie Qwen is a lightweight tuning tool that improves the balance of multilingual generation by smoothing the token probability distribution of Qwen and similar models; this release applies it to Qwen3-14B.

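The card does not spell out the mechanism, but "smoothing the token probability distribution" can be pictured with a toy sketch: scale down the output-head rows of tokens from an over-represented script so their logits, and therefore their sampling probabilities, shrink. Everything below (the base checkpoint, the Unicode range, the scale factor) is an illustrative assumption, not Smoothie Qwen's actual implementation:

```python
# Toy illustration of token-probability smoothing (NOT Smoothie Qwen's
# actual code): down-weight lm_head rows whose tokens decode to CJK
# characters, lowering their logits at every decoding step.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-14B"  # assumed base checkpoint
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

def in_cjk(token_text: str) -> bool:
    """Heuristic: does the decoded token contain a CJK ideograph?"""
    return any("\u4e00" <= ch <= "\u9fff" for ch in token_text)

scale = 0.5  # illustrative smoothing factor
with torch.no_grad():
    head = model.get_output_embeddings().weight  # shape: (vocab, hidden)
    for token_id in range(head.shape[0]):
        if in_cjk(tok.decode([token_id])):
            head[token_id] *= scale  # shrink logits for these tokens
```
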
## Smoothie Qwen3 8B
*dnotitia · Apache-2.0 · Large Language Model · Transformers · English · 267 downloads · 8 likes*

The same token-probability smoothing applied to the Qwen3-8B checkpoint.

## Smoothie Qwen3 32B
*dnotitia · Apache-2.0 · Large Language Model · Transformers · English · 1,335 downloads · 4 likes*

The same token-probability smoothing applied to the Qwen3-32B checkpoint.

## GLM 4 32B 0414 Unsloth Bnb 4bit
*unsloth · MIT · Large Language Model · Transformers · Multilingual · 87 downloads · 2 likes*

GLM-4-32B-0414 is a new member of the GLM family with 32 billion parameters and performance comparable to the GPT and DeepSeek series; it supports local deployment. This repository packages the model as Unsloth's bitsandbytes 4-bit quantization.

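Pre-quantized bnb-4bit repositories normally load directly through Transformers, since the quantization config travels with the checkpoint. A minimal sketch, assuming the repo id `unsloth/GLM-4-32B-0414-unsloth-bnb-4bit` and a CUDA GPU with the bitsandbytes package installed:

```python
# Minimal sketch: loading a pre-quantized bitsandbytes 4-bit checkpoint.
# The repo id is assumed from the card title above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "unsloth/GLM-4-32B-0414-unsloth-bnb-4bit"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # 4-bit quantization config ships inside the checkpoint
)
```

Quantizing the full-precision GLM-4-32B-0414 yourself is also possible by passing a `BitsAndBytesConfig(load_in_4bit=True)` as `quantization_config` to `from_pretrained`.
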
## GLM 4 32B 0414 GGUF
*unsloth · MIT · Large Language Model · Multilingual · 4,680 downloads · 10 likes*

GLM-4-32B-0414 is a 32-billion-parameter large language model comparable in performance to GPT-4o and DeepSeek-V3. It supports both Chinese and English and excels at code generation, function calling, and complex task processing. This repository provides GGUF quantizations for llama.cpp-compatible runtimes.

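GGUF builds target llama.cpp-family runtimes rather than Transformers. A minimal llama-cpp-python sketch; the local file name is an assumption, and you would pick whichever quant (e.g. Q4_K_M) fits your RAM or VRAM:

```python
# Minimal sketch: running a GGUF quant with llama-cpp-python.
# The model file name is an assumption, not a published artifact name.
from llama_cpp import Llama

llm = Llama(
    model_path="GLM-4-32B-0414-Q4_K_M.gguf",  # assumed local file
    n_ctx=8192,       # context window to allocate
    n_gpu_layers=-1,  # offload all layers to GPU if possible
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "用Python写一个快速排序。"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

`create_chat_completion` follows the OpenAI-style response schema, which is why the reply text sits under `choices[0].message.content`.
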
## 360zhinao3 7B O1.5
*qihoo360 · Apache-2.0 · Large Language Model · Transformers · Multilingual · 35 downloads · 3 likes*

360Zhinao3-7B-O1.5 is a long chain-of-thought model open-sourced by Qihoo 360. Fine-tuned from 360Zhinao3-7B-Instruct, it supports complex reasoning tasks.

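A long chain-of-thought model is queried like any chat model, but needs a generous generation budget because the final answer follows a lengthy derivation. A sketch assuming the repo id `qihoo360/360Zhinao3-7B-O1.5`:

```python
# Minimal sketch: querying a long chain-of-thought model through the
# tokenizer's chat template. Repo id is an assumption from the card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "qihoo360/360Zhinao3-7B-O1.5"  # assumed repo id
tok = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto", trust_remote_code=True
)

messages = [{"role": "user", "content": "9.11 和 9.9 哪个更大？请逐步推理。"}]
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=2048)  # room for the CoT
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```
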
## Qwen2.5 14B DeepSeek R1 1M Uncensored
*FiditeNemini · Large Language Model · Transformers · 154 downloads · 6 likes*

A 14-billion-parameter large language model based on Qwen2.5-14B-DeepSeek-R1-1M and fused with DeepSeek-R1-Distill-Qwen-14B-abliterated-v2 using the TIES merge method.

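The TIES method referenced here (trim, elect sign, disjoint merge) reduces interference when fusing several fine-tunes of a common base. A toy sketch of the three steps on raw tensors; this illustrates the algorithm generically, not this repository's exact merge recipe:

```python
# Toy TIES merge for a single weight tensor: keep only the largest
# task-vector entries, elect a majority sign per parameter, then
# average just the entries that agree with that sign.
import torch

def ties_merge(base, finetuned, density=0.2):
    """base: tensor; finetuned: list of tensors shaped like base."""
    deltas = [ft - base for ft in finetuned]           # task vectors
    trimmed = []
    for d in deltas:
        k = max(1, int(density * d.numel()))           # keep top-k by magnitude
        thresh = d.abs().flatten().kthvalue(d.numel() - k + 1).values
        trimmed.append(torch.where(d.abs() >= thresh, d, torch.zeros_like(d)))
    stacked = torch.stack(trimmed)                     # (n_models, ...)
    elected = torch.sign(stacked.sum(dim=0))           # majority sign
    agree = (torch.sign(stacked) == elected) & (stacked != 0)
    merged = (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)
    return base + merged

# Tiny usage example on dummy tensors:
base = torch.zeros(5)
merged = ties_merge(base, [torch.randn(5), torch.randn(5)])
```
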
## TinyLlama V1.1 Math Code
*TinyLlama · Apache-2.0 · Large Language Model · Transformers · English · 3,436 downloads · 11 likes*

TinyLlama is a compact 1.1-billion-parameter language model that adopts the same architecture and tokenizer as Llama 2, making it suitable for applications with limited compute and memory. This v1.1 variant targets math and code.

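At 1.1B parameters the model is small enough to try on CPU through the text-generation pipeline. A minimal sketch, assuming the repo id `TinyLlama/TinyLlama_v1.1_math_code`:

```python
# Minimal sketch: CPU-friendly generation with a 1.1B model.
# The repo id is assumed from the card title above.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama_v1.1_math_code",  # assumed repo id
)
print(pipe("def fibonacci(n):", max_new_tokens=64)[0]["generated_text"])
```
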
## Xwin LM 13B V0.2
*Xwin-LM · Large Language Model · Transformers · 713 downloads · 51 likes*

Xwin-LM develops large language model alignment technology on top of Llama 2 and performs strongly on the AlpacaEval benchmark.

## Chinese Llama 2 1.3B
*hfl · Apache-2.0 · Large Language Model · Transformers · Multilingual · 1,074 downloads · 19 likes*

Chinese-LLaMA-2-1.3B is a Chinese foundation model built on Meta's Llama-2. It extends the vocabulary with Chinese tokens and continues pre-training on Chinese data to strengthen basic Chinese semantic understanding.

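Vocabulary expansion is easiest to see through token counts: the expanded tokenizer should split Chinese text into far fewer tokens than the original Llama-2 tokenizer. A sketch with assumed repo ids (and note that the meta-llama repository is gated):

```python
# Sketch: compare how many tokens a Chinese sentence costs under the
# Chinese-expanded tokenizer vs. the original Llama-2 tokenizer.
# Both repo ids are assumptions from the card above.
from transformers import AutoTokenizer

zh = "大规模语言模型的中文语义理解能力"
expanded = AutoTokenizer.from_pretrained("hfl/chinese-llama-2-1.3b")
original = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

print(len(expanded.tokenize(zh)), "tokens with the expanded vocabulary")
print(len(original.tokenize(zh)), "tokens with the original Llama-2 vocabulary")
```
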
## Anima33B Merged
*lyogavin · Other · Large Language Model · Transformers · Chinese · 52 downloads · 30 likes*

The first open-source QLoRA-based 33B Chinese large language model, fine-tuned on top of Guanaco 33B for stronger Chinese capabilities.

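The QLoRA recipe the card references loads the base model in 4-bit and attaches trainable LoRA adapters. A sketch of that setup; the base repo id and hyperparameters are generic illustrations, not Anima's actual training configuration:

```python
# Sketch of a QLoRA setup: 4-bit NF4 base model plus LoRA adapters.
# Repo id and hyperparameters are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",            # QLoRA's NormalFloat4
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
base = AutoModelForCausalLM.from_pretrained(
    "timdettmers/guanaco-33b-merged",     # assumed Guanaco 33B base
    quantization_config=bnb,
    device_map="auto",
)
lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attach adapters to attention
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # only the LoRA weights train
```
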
## Bert Seg V2
*simonnedved · Apache-2.0 · Large Language Model · Transformers · 20 downloads · 0 likes*

An Apache-2.0-licensed model whose card does not document its specific function; the name suggests a BERT-based segmentation model.